
Thursday, June 11, 2020

# life: apropos of widespread racism, three in four people (probably even more than three) hold an unconscious negative view of the world: study

<< Most Australians tested for unconscious bias hold a negative view of Indigenous Australians which can lead to widespread racism, new analysis from The Australian National University (ANU) shows. >>

<< The ANU researchers say 75 per cent of Australians tested using the Implicit Association Test by a joint initiative of universities including Harvard, Yale and the University of Sydney hold a negative implicit or unconscious bias against Indigenous Australians. >>

Three in four people hold negative view of Indigenous people: study. Australian National University. June 9, 2020.


Siddharth Shirodkar. Bias against Indigenous Australians: Implicit association test results for Australia. Journal of Australian Indigenous Issues, Vol. 22, Issue 3-4, Dec 2019.


Also

# brain: #CTZ images, cyclical phantasmal pervasive recurring visions. FonT. Jun 29, 2019.


Saturday, June 20, 2020

# life: unaware racisms

<< racism is about action in everyday life, not just words or hashtags at a time of uprising. We can be careful about what we say – language is conscious and controllable. But it is perfectly possible to hold deep-seated racist views, sometimes subconsciously, and simultaneously announce you are definitely not racist. >>

Geoff Beattie. Black Lives Matter: you may be a vocal supporter and still hold racist views. Jun 17, 2020.


Also

keyword "CTZ" in FonT



Thursday, May 23, 2019

# ai: apropos of the black-box approach in machine learning algorithms

<< A black box is a machine learning program that does not explain how it reaches its conclusions, either because it is too complicated for a human to understand or because its inner workings are proprietary. In response to concerns that these types of models may include unjust inner workings—such as racism—another growing trend is to create additional models to "explain" these black boxes. >>

<<  Even when so-called explanation models are created, (..) decision-makers should be opting for interpretable models, which are completely transparent and easily understood by its users. >>

Ken Kingery. Stop gambling with black box and explainable models on high-stakes decisions. Duke University. May 14, 2019.

https://m.techxplore.com/news/2019-05-gambling-black-high-stakes-decisions.html  

Cynthia Rudin. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, Volume 1, pages 206–215. May 13, 2019.

https://www.nature.com/articles/s42256-019-0048-x
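As a minimal illustration of the contrast the excerpt turns on, the sketch below (Python with scikit-learn; the synthetic dataset and the specific models are illustrative assumptions, not taken from the cited paper) trains an opaque ensemble alongside a model whose entire decision rule is a handful of readable coefficients:

```python
# Illustrative sketch only: a "black box" (large tree ensemble) vs. an
# interpretable model (logistic regression) on synthetic data. Not the
# method of the cited paper; chosen just to show the difference in kind.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic, unnamed features stand in for whatever the real decision uses.
X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Black box": 200 trees, no single human-readable decision rule.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# Interpretable: one weight per feature; the whole model can be inspected.
interpretable = LogisticRegression()
interpretable.fit(X_train, y_train)

print("black box accuracy:    ", black_box.score(X_test, y_test))
print("interpretable accuracy:", interpretable.score(X_test, y_test))
print("interpretable weights: ", interpretable.coef_.round(2))
```

The second model's weights can be audited directly, which is the kind of transparency the quoted passage argues decision-makers should demand in high-stakes settings, rather than relying on post-hoc "explanation" models bolted onto a black box.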